Thought acceleration effect of self-derived LLMs
I feel like I already summarized this, but maybe I didn't?
Most text in the world is written in "expressions that many people can read and understand," whereas my research notes are written in "expressions that I can understand." An AI that does RAG over the latter therefore accelerates my personal thinking far more efficiently than ChatGPT does.
In my research notes I don't write explanations for words I already know, so an AI that reads them doesn't explain what I already know either.
Concepts are tools for the economy of thought, so it is more efficient to use them without explanation in one's own thinking. ChatGPT is useful when explaining things to others.
omni's output is closer to my personal "water surface" (the boundary of what has not yet been verbalized), so it is more effective at supporting verbalization.
Maybe it has to do with the different characteristics of blogs and Scrapbox: in Scrapbox, instead of explaining the same concept over and over, you create a page for that concept and link to it.
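The mechanism described above, retrieving fragments from one's own notes and putting them in the prompt so the LLM answers in one's own register, can be sketched minimally. This is a hypothetical illustration, not nishio's actual pipeline: the toy bag-of-words similarity stands in for a real embedding model, and all names (`retrieve`, `build_prompt`) are made up for this example.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a vector model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, notes: list[str], k: int = 2) -> list[str]:
    """Return the k note fragments most similar to the query."""
    q = embed(query)
    ranked = sorted(notes, key=lambda n: cosine(q, embed(n)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, notes: list[str]) -> str:
    """Assemble a prompt from retrieved personal fragments plus the question.
    Because the context comes from the user's own notes, the model's answer
    inherits the user's vocabulary instead of re-explaining known concepts."""
    fragments = retrieve(query, notes)
    context = "\n".join(f"- {f}" for f in fragments)
    return f"Context from my notes:\n{context}\n\nQuestion: {query}"
```

The design point is simply where the retrieval corpus comes from: pointing `notes` at personal research notes rather than a general corpus is what makes the output land closer to the writer's own "water surface."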
human.iconI feel that writing style is a major factor in reading speed; when I give GPT my own sentences to work from, the generated text is easier for me to read than when I just have GPT generate text on its own.
nishio.iconIndeed, omni may be easier for me to read because it speaks in my style. Ultimately, AI assistants may be more productive when personalized to each individual.
The feeling differs depending on whether the data is primarily self-derived or not.
When it is self-derived, it is as if the AI and I are driving the thinking as one unified entity; it feels like "I am accelerating."
It feels like "seeing things from a different perspective."
When it is derived from others, the feeling is "Oh, so this is what Mr. X said about this subject..."
I feel like I "found someone else's statement."
Fragments derived from myself have already been chewed over and reconstructed inside me, so they connect smoothly; fragments derived from others still have a hard surface.
Is it because I readily let things of other people's origin pass as "that's one way of thinking," even when they differ from my current opinion?
Am I escaping into "it's normal for others to have different opinions"?
When a self-derived opinion differs from my current opinion, since both are me, it strongly triggers the question "Why do we have different opinions?"
---
This page is auto-translated from /nishio/自分由来LLMの思考加速効果 using DeepL. If you see something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.